Concept
computer architecture
Parents
Computer Engineering, Computer Science
Children
Approximate Computing, Computer Systems, Domain-specific Architectures, Edge Computing, Energy Efficiency
Publications: 166K
Citations: 8.3M
Authors: 251.9K
Institutions: 11.8K
[1] Computer architecture - Wikipedia — In computer science and computer engineering, computer architecture is a description of the structure of a computer system made from component parts. It can sometimes be a high-level description that ignores details of the implementation. At a more detailed level, the description may include the instruction set architecture design, microarchitecture design, logic design, and implementation. Instruction set architecture (ISA): defines the machine code that a processor reads and acts upon, as well as the word size, memory address modes, processor registers, and data types. An instruction set architecture (ISA) is the interface between the computer's software and hardware and can also be viewed as the programmer's view of the machine.
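The ISA's role as the software/hardware interface can be sketched with a toy interpreter. The three-instruction ISA below is entirely hypothetical, invented for illustration: it fixes the programmer-visible state (four registers) and the meaning of each opcode, while saying nothing about how a processor would implement them.

```python
# A toy 3-instruction ISA (hypothetical, for illustration only):
# each instruction names an operation, a destination register, and
# two source operands. The ISA fixes the programmer-visible state
# (here: 4 registers) and the semantics of each opcode; how a real
# processor implements this contract is microarchitecture.

def run(program, regs=None):
    """Execute a list of (opcode, dest, src1, src2) instructions."""
    regs = regs or [0, 0, 0, 0]       # programmer-visible registers r0..r3
    for op, rd, a, b in program:
        if op == "li":                # load immediate: rd <- a
            regs[rd] = a
        elif op == "add":             # rd <- regs[a] + regs[b]
            regs[rd] = regs[a] + regs[b]
        elif op == "mul":             # rd <- regs[a] * regs[b]
            regs[rd] = regs[a] * regs[b]
        else:
            raise ValueError(f"illegal opcode: {op}")
    return regs

program = [
    ("li", 0, 6, None),   # r0 = 6
    ("li", 1, 7, None),   # r1 = 7
    ("mul", 2, 0, 1),     # r2 = r0 * r1 = 42
    ("add", 3, 2, 2),     # r3 = r2 + r2 = 84
]
print(run(program))       # [6, 7, 42, 84]
```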
[2] Computer Organization And Architecture — Computer architecture is concerned with the way hardware components are connected together to form a computer system; computer organization is concerned with the way that architecture is implemented, in terms of the structure and behaviour of the system as seen by the user. Architecture acts as an interface between computer hardware and software; organization deals with the components and the connections between them. Architecture helps us understand the functionalities of a system; organization provides the details of how exactly the functional units are arranged and interconnected. A programmer views the architecture in terms of the instruction set architecture (ISA), instruction formats, addressing modes, and registers; organization is the actual implementation of that architecture to achieve the specified system performance. Architecture is the first step in designing and building a computer and deals with high-level design issues and specifications, involving logic (ISA instruction sets, addressing modes, data types, cache memory optimization); organization is defined from the system architecture and deals with low-level design issues, involving physical hardware components such as circuit design, adders, signals, and peripherals.
[3] Computer Architecture: Components, Types and Examples - Spiceworks — Computer architecture is defined as the end-to-end structure of a computer system that determines how its components interact with each other in helping execute the machine’s purpose (i.e., processing data), often without reference to the actual technical implementation. Support for temporary storage: memory is also a vital component of computer architecture, with several types often present in a single system. In contrast to the von Neumann architecture, in which program instructions and data use the very same memory and pathways, this design [the Harvard architecture] separates the two.
[4] Computer architecture | Definition & Facts | Britannica — computer architecture, structure of a digital computer, encompassing the design and layout of its instruction set and storage registers. The architecture of a computer is chosen with regard to the types of programs that will be run on it (business, scientific, general-purpose, etc.). Its principal components or subsystems, each of which could be said to have an architecture of its own, are
[5] PDF — 1.1 Computer Organization and Architecture. Computer architecture refers to those attributes of a system that have a direct impact on the logical execution of a program. Examples: the instruction set; the number of bits used to represent various data types; I/O mechanisms; memory addressing techniques.
[6] Common Software Architecture Mistakes - Forbes — One of the most common mistakes in software architecture is not architecting the system to be observable and monitorable. This can lead to problems such as data loss, system crashes and security
[7] PDF — Challenge misconceptions Use formative questioning to uncover misconceptions and adapt teaching to address them as they occur. Awareness of common misconceptions alongside discussion, concept mapping, peer instruction, or simple quizzes can help identify areas of confusion. Unplug, unpack, repack Teach new concepts by first unpacking complex
[8] Misconceptions - Taking Learning Seriously — Changing students’ misconceptions involves revising their conceptual understanding, and not simply adding correct new information to their knowledge base. Moreover, when students predict outcomes, they may reveal misconceptions about the relevant concepts, which can help the teacher give immediate feedback and plan further instruction on the topic. Research has shown that in some cases refutational texts alone can prompt change in student misconceptions. However, students can have deeper misconceptions that hinder new learning and are resistant to traditional instruction. To help students revise their misconceptions, instructors should use concept tests to identify and assess their students’ misconceptions, and consider using refutational teaching, in which students read material and hear instructor explanations that directly challenge their misconceptions and clarify discipline-based ideas.
[11] 3 Modern Trade-offs in Embedded Systems Design | Beningo — However, more processing power means more power consumption. Power consumption is critical to battery-operated devices to maximize battery life and minimize product maintenance in the field. The faster the battery needs to be replaced, the higher the maintenance costs. Trading off performance and power consumption can be critical.
[14] [PDF] Energy-performance tradeoffs in processor architecture and ... — This paper applies an integrated architecture-circuit optimization framework to map out energy-performance trade-offs of several different high-level processor architectures, and shows how the joint architecture-circuit space provides a trade-off range of approximately 6.5x in performance for 4x energy. Power consumption has become a major constraint in the design of processors today.
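The energy-performance trade-off the paper maps out can be made concrete with a common figure of merit, the energy-delay product (EDP). The numbers below are illustrative only, loosely echoing the quoted 6.5x-performance-for-4x-energy range; they are not taken from the paper itself.

```python
# Comparing hypothetical processor design points by energy-delay
# product (EDP = energy * delay). Numbers are illustrative, not from
# the cited paper: a "fast" design trades 4x energy for 6.5x
# performance relative to a "slow" baseline (normalized units).

def edp(energy, delay):
    return energy * delay

slow = {"energy": 1.0, "delay": 1.0}          # baseline design point
fast = {"energy": 4.0, "delay": 1.0 / 6.5}    # 6.5x faster, 4x energy

edp_slow = edp(slow["energy"], slow["delay"])
edp_fast = edp(fast["energy"], fast["delay"])
print(edp_slow, round(edp_fast, 3))
# The faster point has the lower EDP, so under an EDP metric the
# extra energy is "worth it"; under an energy-only metric it is not.
```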
[16] Semiconductors and Chip Design Trends and Technology — The semiconductor industry is undergoing rapid transformation, driven by innovations in next-generation processors, AI's influence on chip design, and the efforts of leading semiconductor companies.
[17] Transformational Trends & Impacts on the Semiconductor Industry — The emergence of AI servers and the expansion of cloud computing are pivotal forces reshaping the semiconductor industry. As enterprises increasingly adopt cloud-based infrastructures, there is a pressing need for advanced semiconductors to manage massive amounts of data efficiently and reliably. Cloud service providers are heavily investing in AI-enabled servers and innovative data centers, necessitating breakthroughs in semiconductor technology. Together, AI servers and cloud computing are set to redefine the future of the semiconductor industry. As AI, cloud computing, and memory chip innovations drive the next wave of transformation, companies must navigate an intricate landscape of technological advancements, energy considerations, and geopolitical factors. www.digitimes.com/news/a20241230PD222/semiconductor-industry-2025-ai-server-memory-chips-trump-2.0.html “2025 Demand: Semiconductor Industry Market, Cloud, AI.” Digitimes, 2 Jan. 2025, www.digitimes.com/news/a20250102PD214/2025-demand-semiconductor-industry-market-cloud-ai.html
[18] 7 Emerging Trends in Semiconductors that Hold Significant Potential — The semiconductor industry is on the brink of a revolution with emerging trends that promise to reshape the future of technology. Insights from a Founder and a Senior Product Manager highlight the move towards chiplet-based architecture and its potential to revolutionize the industry.
[21] Chapter 1: Computer Abstractions and Technology — The abstraction provides a standardized interface between hardware and low-level software. This abstraction is in contrast to the implementation, which is the hardware that carries out the architecture abstraction. Note that manufacturers often produce different implementations varying in cost and performance for the same architecture.
[43] PDF — Journal of Global Research in Computer Sciences, e-ISSN: 2229-371X, GRCS Volume 14, Issue 1, March 2023. History of computer architecture: computer architecture has its roots in the early days of computing, and the development of the first electronic computers in the 1940s and 1950s marked a significant milestone in the field. Recent trends, such as the development of multicore processors, the use of GPUs for general-purpose computing, and the growing interest in quantum computing, show that this field will continue to evolve and innovate in the years to come.
[44] PDF — Milestones in Computer Architecture: Mark I, the first programmable computer in the US (1944). The machine was designed to produce ballistic "firing tables", replacing the "computer ladies". Characteristics: 5 tons, 800 km of wire, 2.5 m tall and 15 m long, 5 hp electric motor.
[45] The Evolution of Computer Architecture | by Olivia | Medium — Innovations in semiconductor technology, such as reduced instruction set computing (RISC) architectures, further enhanced the speed and efficiency of computing devices. The evolution of computer architecture is a testament to human creativity, curiosity, and perseverance. From the bulky mainframes of yesteryear to the quantum computers of tomorrow, each milestone in computing history represents a leap forward in our quest to understand and harness the power of information technology. Whether it’s unlocking the mysteries of the universe through quantum computation or harnessing the power of AI to tackle pressing societal challenges, the journey of computer architecture reminds us that the only constant in technology is change.
[46] Exploring the History of Computer Architecture Evolution — This architecture was proposed in 1945 by von Neumann and his colleagues, and it's based on the idea that instructions and data are stored in the same memory. The Von Neumann architecture introduced the concept of stored-program computers, where both instructions and data are stored in the same memory, allowing for flexible program execution. The von Neumann architecture consists of a processor with an arithmetic and logic unit (ALU) and a control unit, a memory unit, connections for input/output devices, and a secondary storage for saving and backing up data. This architecture revolutionized computing by allowing instructions and data to be loaded into the same memory unit.
[50] Quantum vs Classical Computing: Key Differences - QuantumExplainer.com — The Evolution of Computing: From Classical to Quantum. The shift from binary processing in classical computers to the multidimensional array of states in quantum computing has the potential to reshape the computational capabilities we have come to rely on, opening doors to new paradigms of data handling and problem-solving. As the landscape of technology continues to unfold, the data manipulation methods and computational models of both quantum and classical computing will define the new frontier. Quantum computers operate on quantum bits, or qubits, leveraging phenomena such as entanglement and superposition to perform calculations on a scale that classical systems cannot match.
[87] Understanding Computer Architecture: Key Concepts Explained — Computer architecture involves several critical components that work together to ensure that data processing and computing tasks are efficiently carried out. Unlike the Von Neumann architecture, the Harvard architecture uses separate memory spaces for instructions and data. CISC architecture includes a large set of instructions, each capable of performing complex tasks. In parallel architecture, multiple processors work together to perform computing tasks simultaneously. Computer architecture is critical to the development and performance of computing systems. A well-designed architecture ensures efficient data processing, resource utilization, and system performance. Computer architecture is the backbone of all computing systems, determining how hardware and software interact to process data and perform tasks.
[92] Von Neumann Architecture vs. Harvard Architecture - Spiceworks — Chronologically following the Harvard architecture, which separated memory and processing units, Von Neumann’s design significantly enhanced computer performance by efficiently storing and executing instructions. Von Neumann stores instructions and data in the same memory, simplifying the architecture. Harvard architecture’s fast and efficient access to both instructions and data makes it ideal for real-time applications in embedded systems. Similar to Von Neumann architecture, Harvard architecture uses registers to store data and instruction addresses for faster processing. This segregation allows a computer system to execute instructions and access data concurrently, enhancing performance and efficiency. Unlike Von Neumann architecture, which uses a unified memory space for both data and instructions, Harvard architecture mitigates bottlenecks and boosts computing speed.
[93] Comparing Harvard Architecture vs von Neumann: Understanding the ... — Overall, the difference in instruction execution between the Harvard and von Neumann architectures lies in the memory organization and efficiency. The Harvard architecture's separate instruction and data memories enable simultaneous fetch and execution, enhancing performance.
[95] Harvard vs Von Neumann Architecture Explained - SoC — The key difference between Harvard and Von Neumann architectures is that Harvard architecture has physically separate storage and signal pathways for instructions and data, while Von Neumann architecture uses the same memory and pathways for both. In Von Neumann architecture, data and instructions are stored in the same memory and travel across the same pathways to the CPU. In Harvard architectures, separate instruction and data caches are used; MicroBlaze, a soft processor core designed by Xilinx for FPGA and embedded systems, uses Harvard architecture for instruction and data separation. Many modern designs use shared memory for both data and instructions, but with separate CPU caches and data pathways that emulate the separate memories of Harvard machines.
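The practical consequence of shared versus separate pathways can be sketched with a toy cycle-count model. The accounting below is an assumed simplification for illustration, not a real simulator.

```python
# Toy cycle model (illustrative assumptions): every instruction needs
# one instruction fetch, and some fraction also need one data access.
# With a single shared memory port (von Neumann), the two accesses
# serialize; with separate instruction and data ports (Harvard),
# they can overlap in the same cycle.

def cycles_von_neumann(n_instr, data_frac):
    # 1 cycle per instruction fetch + 1 extra cycle per data access
    return n_instr + int(n_instr * data_frac)

def cycles_harvard(n_instr, data_frac):
    # instruction fetch and data access proceed concurrently
    return n_instr

n, frac = 1000, 0.3   # assume 30% of instructions touch data memory
print(cycles_von_neumann(n, frac), cycles_harvard(n, frac))
# 1300 vs 1000 cycles: the shared bus is the "von Neumann bottleneck".
```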
[98] Interconnection network organization and its impact on performance and ... — The memory module latency is fixed, whereas the memory waiting time is affected by memory cycle time and contention at a memory module. T_a, the delay to receive and assemble the message header and the first word of the cache line for which the processor is waiting, depends on the channel width (flit size).
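The latency structure the excerpt describes can be sketched as a header delay plus payload serialization over a flit-wide channel. The functional form and parameters below are assumptions for illustration, not the paper's exact equation.

```python
# Sketch of a simple interconnect latency model (assumed form): time
# to deliver a cache line = header/routing delay + cycles to stream
# the payload, one flit per cycle, over a channel of a given width.

def message_latency(header_cycles, line_bytes, flit_bytes):
    flits = -(-line_bytes // flit_bytes)   # ceil division
    return header_cycles + flits

# A 64-byte cache line over a 16-byte channel with a 10-cycle header:
print(message_latency(10, 64, 16))   # 14 cycles
# Doubling the channel width (flit size) halves the streaming term:
print(message_latency(10, 64, 32))   # 12 cycles
```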
[99] Addressing interconnect challenges for enhanced computing performance — The shrinking footprint of integrated circuits is now shifting the limits of performance from the transistors themselves to the interconnections between them. The resistance-capacitance delay from interconnects worsens with greater device density because interconnection paths lengthen, wires become narrower, and more types of connections are
[100] PDF — For several decades, high-performance computer systems have incorporated a memory hierarchy. Because central processing unit (CPU) performance has risen much more rapidly than memory performance since the late 1970s, modern computer systems have an increasingly severe performance gap between CPU and memory. Figure 1 illustrates this trend.
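The effect of that CPU-memory gap is usually quantified with average memory access time (AMAT = hit time + miss rate × miss penalty). A sketch with illustrative cycle counts:

```python
# Average memory access time for a two-level hierarchy, showing why
# the CPU-memory gap makes caches essential. Latencies are
# illustrative, in CPU cycles: a 1-cycle cache and 200-cycle DRAM.

def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

print(amat(1, 0.05, 200))   # 11.0 cycles at a 5% miss rate
print(amat(1, 0.50, 200))   # 101.0 cycles at a 50% miss rate
# A well-behaved cache hides almost all of the 200-cycle DRAM latency.
```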
[107] What are the most common system implementation mistakes? — There are many potential problems with system implementations: poor project management, poor planning, end-user resistance or lack of time to learn the new system, data management issues, false promises from vendors and consultants during the sales cycle, internal politics and more.
[121] History of Computer Architecture - csbranch.com — The history of computer architecture reflects a continuous evolution driven by technological advancements and changing needs. From mechanical calculators to today's high-performance computing systems, each era has built upon the innovations of the past. As technology continues to advance, future developments in computer architecture will
[129] hardware - von neumann vs harvard architecture - Stack Overflow — Well current CPU designs for PC's have both Harvard and Von Neumann elements (more Von Neumann though). ... The fundamental difference between Von Neumann architecture and Harvard architecture is that while in the Harvard architecture, instruction memory is distinct from data memory, in Von Neumann they are the same.
[130] Harvard Architecture vs. Von Neumann Architecture — Von Neumann Architecture, named after the renowned mathematician and computer scientist John von Neumann, is another widely used computer architecture. Unlike Harvard Architecture, Von Neumann Architecture uses a single memory unit to store both instructions and data.
[141] 10 Difference Between Von Neumann And Harvard Architecture — Stored-program concept: the Von Neumann architecture introduced the revolutionary idea of a “stored program”, in which both the instructions that control the computer’s operations and the data it processes are stored in the same memory system. Unlike the Von Neumann architecture, where both data and instructions share the same memory space, the Harvard architecture uses separate memory spaces for instructions and data. Description: the Von Neumann architecture is a theoretical design based on the stored-program computer concept; the Harvard architecture is a modern computer architecture based on the Harvard Mark I relay-based computer model. Memory system: Von Neumann has only one bus, used for both instruction fetches and data transfers; Harvard has separate memory spaces for instructions and data, physically separating the signals and storage for code and data memory.
[167] Nvidia's Impact on Machine Learning Evolution — The importance of understanding the intersection of Nvidia and machine learning lies in various key aspects. Firstly, Nvidia's GPU architecture is particularly suited for the matrix and vector calculations that are core to machine learning algorithms.
[168] The role of GPU architecture in AI and machine learning - Telnyx — GPU architecture offers unmatched computational speed and efficiency, making it the backbone of many AI advancements. The foundational support of GPU architecture allows AI to tackle complex algorithms and vast datasets, accelerating the pace of innovation and enabling more sophisticated, real-time applications.
[170] Why GPUs Are the Powerhouse of AI: NVIDIA's Game-Changing Role in Machine Learning — TheDayAfterAI News — One of the most visible manifestations of GPUs' impact on AI is OpenAI’s ChatGPT, a large language model (LLM) that operates on thousands of NVIDIA GPUs. Serving over 100 million users, ChatGPT exemplifies how GPUs facilitate real-time generative AI services, delivering swift and accurate responses by leveraging the massive parallel processing power of NVIDIA’s hardware. Moreover, with over 40,000 companies utilizing NVIDIA GPUs for AI and accelerated computing, and a global community of 4 million developers, the ecosystem is poised for continued growth and diversification.
[171] Future Of Gpu Architecture Insights - Restackio — The future of GPU architecture for AI is set to be defined by advancements in parallel processing, specialized hardware, machine learning integration, energy efficiency, and interconnect bandwidth. These trends will not only enhance the performance of AI applications but also ensure that GPU technology remains at the forefront of computing
[175] Optimizing Software for Multicore Arm Microcontrollers — Cache optimization: efficient use of cache can significantly impact performance. Techniques include cache locking for critical sections and optimizing data structures for cache efficiency. Multicore processors often have complex cache hierarchies, and effective use of these caches is crucial for performance.
[176] Concurrency Challenges and Optimization Strategies in MultiCore ... — Multi-core architectures have become the cornerstone of modern computing, offering significant performance improvements through parallel processing. However, these benefits come with substantial challenges, particularly in managing concurrency. This paper explores the key challenges in concurrent programming on multi-core systems and presents a comprehensive overview of optimization strategies.
[177] The multicore architecture - ScienceDirect — The advantages of multicore architectures come at the expense of several challenges such as cache coherency and communication among the cores. Design issues encountered with the multicore architecture such as cache coherency, interconnection frameworks, and designing software for parallel execution are then examined. Architectural features that must be determined for a multicore design include the number of cores, heterogeneity/homogeneity of the cores, cache memory sharing and coherency, and the interactions among the cores. To take advantage of the capabilities of a multicore processor architecture, the software must be designed and developed for parallel execution. Due to the multicore architecture, methods have been introduced to exploit the potential parallelism in application programs to efficiently use each core
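The "software must be designed for parallel execution" point above reduces to decomposing work into independent chunks. A minimal sketch (note: CPython's GIL limits thread speedup for pure-Python arithmetic, so real CPU-bound code would use process pools or native libraries; the decomposition pattern is what matters here):

```python
# Decomposing an array sum into independent chunks -- the key step in
# designing software for parallel execution on a multicore processor.
# ThreadPoolExecutor is used for portability of this sketch; CPython's
# GIL means real CPU-bound speedups need processes or native code.

from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(chunk)

def parallel_sum(data, workers=4):
    size = -(-len(data) // workers)   # ceil division: chunk size
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

data = list(range(100_000))
print(parallel_sum(data) == sum(data))   # True
```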
[181] Benefits Of Heterogeneous Computing In Ai | Restackio — Heterogeneous computing allows for a more tailored allocation of resources, enhancing the overall throughput of AI applications. For instance, the combination of CPUs and GPUs has emerged as a popular strategy to fully utilize the computational power of GPUs.
[204] Emerging Technologies and Their Impact on Computer Architecture | by NRT0401 | Medium — As we advance into an era characterized by artificial intelligence (AI), quantum computing, and the Internet of Things (IoT), understanding the implications of these technologies on computer architecture is crucial for both designers and users. This article examines key emerging technologies and their influence on computer architecture, exploring how they are shaping the future of computing. Artificial intelligence and machine learning have become integral components of modern computing, influencing both hardware and software design. The following are some ways AI impacts computer architecture: neuromorphic computing, an approach that emulates the neural structures of the human brain, leading to architectures designed for more efficient learning and processing of information.
[206] Future Trends in Computer Architecture - PrepBytes Blog — This in-depth research examines the upcoming and ground-breaking developments in computer design that will alter the very core of computing. In turn, machine learning algorithms will improve significantly, opening the door for previously unimaginable developments in autonomous systems, robotics, and medical diagnostics. Heterogeneous architectures are expected to improve performance and energy efficiency by combining different types of processors, such as CPUs, GPUs, and accelerators, to leverage their strengths for specific tasks.
[215] The Future of Digitalisation: Quantum and Neuromorphic Computing — Neuromorphic computing is another innovative technology. This is the concept of building a computer that mimics the way the brain works. Our brain is not very fast at completing mathematical operations, but thanks to the massive networking among neurons, it is extremely efficient at learning and identifying links between various observations.
[217] What Is Neuromorphic Computing? - IBM — Neuromorphic computing can act as a growth accelerator for AI, boost high-performance computing and serve as one of the building blocks of artificial superintelligence. Neurological and biological mechanisms are modeled in neuromorphic computing systems through spiking neural networks (SNNs): input signals are fed to a spiking neural network, which acts as the reservoir. Neuromorphic computing systems both store and process data in individual neurons, resulting in lower latency and swifter computation compared to von Neumann architecture. As an adaptable technology, neuromorphic computing can be used to enhance a robot’s real-time learning and decision-making skills, helping it better recognize objects, navigate intricate factory layouts and operate faster in an assembly line.
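The spiking-neuron building block mentioned above can be sketched as a leaky integrate-and-fire model; the leak and threshold values below are arbitrary illustrative parameters, not from any particular neuromorphic chip.

```python
# A minimal leaky integrate-and-fire neuron, the basic unit of the
# spiking neural networks (SNNs) used in neuromorphic systems.
# The membrane "charge" leaks each step, accumulates input, and the
# neuron emits a spike (and resets) when a threshold is crossed.

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Return a 0/1 spike train for a sequence of input currents."""
    v, spikes = 0.0, []
    for i in inputs:
        v = v * leak + i          # leak stored charge, integrate input
        if v >= threshold:        # fire and reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.4, 0.4, 0.4, 0.0, 0.9, 0.9]))   # [0, 0, 1, 0, 0, 1]
```

Note how information is carried in spike timing rather than in clocked binary values, which is the sense in which such systems depart from the von Neumann model.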
[236] Computer architecture: Components, Types, Applications & Challenges - ParikshaPatr — Computer architecture determines how components interact to perform computational tasks. Specialized applications: custom architectures, like GPUs or neural processing units (NPUs), are tailored for specific tasks, such as graphics rendering or AI computations. Von Neumann architecture is the most common architecture, where data and instructions share the same memory and bus. Instruction Set Architecture (ISA): defines the set of instructions a processor can execute. Computer architecture plays a pivotal role in shaping the functionality and performance of computing systems.
[238] How Computer Architecture Is Characterized — The importance of computer architecture can be seen in modern computing. With the widespread use of devices such as laptops, tablets and smartphones, it is essential that computer architecture is designed to maximize their efficiency and performance. Moreover, the growing demand for cloud computing requires efficient computer architectures.
[239] Why Is Computer Architecture Important — Without efficient computer architectures, modern computing wouldn't be possible, and software engineers would be unable to build optimized and resilient programs. As such, computer architecture is an essential component of computing, and its importance cannot be overstated.
[245] Microarchitecture and Instruction Set Architecture — In this article, we look at what an Instruction Set Architecture (ISA) is and what the difference is between an ISA and a microarchitecture. An ISA is the design of a computer from the programmer's perspective: it describes the computer in terms of the basic operations it must support, and is not concerned with the implementation.
[246] From ISA to Execution: Understanding Microarchitecture — Microarchitecture, also known as computer organization, refers to the way a given instruction set architecture (ISA) is implemented in a particular processor.
[247] Instruction Set Architectures and Performance - Olivia A. Gallucci — The two extremes are Complex Instruction Set Computers (CISC) and Reduced Instruction Set Computers (RISC); note that in this context ISA and Instruction Set Computer (ISC) are synonymous. CISC architectures tend to have a higher CPI (cycles per instruction) because every instruction does more work: in the article's house-building analogy, CISC instructions are like prefabs, fewer in number but complex (heavy), each completing more of the task. An instruction may take more than one clock cycle to execute. In general, the instruction set architecture can affect program performance; CISC and RISC illustrate this concept well.
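The CPI argument in [247] can be made concrete with the classic "iron law" of processor performance: execution time = instruction count × CPI ÷ clock rate. The sketch below uses purely illustrative numbers (not measurements from the cited article) to show how a CISC program with fewer but heavier instructions can still lose to a RISC program with more, cheaper ones.

```python
# Hypothetical CISC-vs-RISC comparison via the iron law of performance.
# All instruction counts and CPI values are illustrative assumptions.

def execution_time_s(instruction_count: int, cpi: float, clock_hz: float) -> float:
    """time = instructions * (cycles / instruction) / (cycles / second)"""
    return instruction_count * cpi / clock_hz

# CISC encoding: fewer instructions, but each averages more cycles.
cisc = execution_time_s(instruction_count=1_000_000, cpi=4.0, clock_hz=1e9)
# RISC encoding: more, simpler instructions at a lower CPI.
risc = execution_time_s(instruction_count=1_500_000, cpi=1.2, clock_hz=1e9)

print(f"CISC: {cisc * 1e3:.2f} ms, RISC: {risc * 1e3:.2f} ms")
# -> CISC: 4.00 ms, RISC: 1.80 ms
```

Swapping the assumed CPI or instruction counts flips the outcome, which is exactly the point: neither style wins in the abstract, only on the product of the three factors.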
[259] Is a Bigger Cache Better? Understanding the Impact of Cache Size on ... — The impact of cache size on system performance varies depending on the type of application being run. For example, applications that rely heavily on data access, such as databases and scientific simulations, can benefit significantly from a larger cache.
[261] Does larger cache size always lead to improved performance? — There is a tradeoff between cache size and hit rate on one side and read latency with power consumption on the other. So the answer to your first question is: technically (probably) possible, but unlikely to make sense, since the L3 cache in modern CPUs, at a size of just a few MBs, already has a read latency of dozens of cycles. Performance depends more on the memory access pattern than on cache size.
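The tradeoff described in [259] and [261] is usually quantified with average memory access time, AMAT = hit time + miss rate × miss penalty. The numbers below are illustrative assumptions, not figures from either source; they show how a bigger cache can win on miss rate yet still be a judgment call once its slower hit time is charged.

```python
# AMAT sketch of the cache-size tradeoff: a larger cache typically lowers
# the miss rate but raises the hit latency. All numbers are assumptions.

def amat_cycles(hit_time: float, miss_rate: float, miss_penalty: float) -> float:
    """AMAT = hit time + miss rate * miss penalty (all in cycles)."""
    return hit_time + miss_rate * miss_penalty

# Small, fast cache: quick hits, more misses.
small = amat_cycles(hit_time=4, miss_rate=0.10, miss_penalty=200)   # 24.0
# Large, slower cache: slower hits, fewer misses.
large = amat_cycles(hit_time=12, miss_rate=0.02, miss_penalty=200)  # 16.0

print(f"small cache AMAT: {small} cycles, large cache AMAT: {large} cycles")
```

With a different access pattern (say, a streaming workload whose miss rate barely improves with capacity), the small cache wins, which matches the answer's point that access pattern matters more than raw size.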
[270] An Overview of Architecture-Level Power- and Energy-Efficient Design ... — Power dissipation and energy consumption became the primary design constraint for almost all computer systems in the last 15 years. Both computer architects and circuit designers intend to reduce power and energy (without a performance degradation) at all design levels, as it is currently the main obstacle to continued scaling according to Moore's law.
[271] Evolution, Challenges, and Optimization in Computer Architecture: The ... — Each type of processor in a heterogeneous computing architecture is optimized for specific tasks, leading to improved performance and reduced power consumption for workloads that can be parallelized across these diverse cores, while also allowing efficient communication and resource sharing between all components. The systolic array architecture allows for high computational efficiency and throughput by enabling multiple operations to be performed simultaneously. This understanding allows us to select the most efficient architecture for a given computational task. (Table I of the paper compares TPU, STPU, and FlexTPU: all three use MAC-unit processing elements and a dataflow computation model, but are optimized for dense matrix multiplication, SpMM, and SpMV respectively.)
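The core operation the MAC-based systolic arrays in [271] accelerate is dense matrix multiplication. The sketch below shows only the multiply-accumulate arithmetic each processing element contributes; a real TPU-style array additionally streams operands through a grid of MAC units on a fixed dataflow schedule, which is not modeled here.

```python
# Minimal sketch of the MAC work behind a systolic-array matrix multiply.
# One multiply-accumulate per (i, j, k) triple; the dataflow timing of a
# real array is deliberately omitted.

def matmul_mac(a: list[list[int]], b: list[list[int]]) -> list[list[int]]:
    """C[i][j] = sum_k A[i][k] * B[k][j]."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                c[i][j] += a[i][k] * b[k][j]  # one MAC operation
    return c

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(matmul_mac(a, b))  # -> [[19, 22], [43, 50]]
```

The appeal of the systolic layout is that these n·m·p MACs are spread across a grid of units that reuse each operand as it passes through, so throughput scales with the array size rather than with memory bandwidth alone.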
[272] (PDF) Energy Efficient Computing Systems: Architectures, Abstractions ... — ...performance-cost-energy tradeoffs for well-defined tasks. Systems also evolved from multi-chip packages to system-on-a-chip (SoC) architectures with accelerators like GPUs, imaging, and AI/deep learning.
[273] A Survey of Memory-Centric Energy Efficient Computer Architecture — Energy-efficient architecture is essential to improve both the performance and power consumption of a computer system. However, modern computers suffer from the severe "memory wall" problem due to the significant performance gap between processor technology and memory technology. Thus, the computer architecture community is evolving from compute-centric to memory-centric designs.
[276] Evolution, Challenges, and Optimization in Computer Architecture: The ... — This paper provides a comprehensive study of this evolution, highlighting the challenges and key advancements in the transition from single-core to multi-core processors. Ultimately, this study emphasizes the role of reconfigurable systems in overcoming current architectural challenges and driving future advancements in computational efficiency. Subjects: Hardware Architecture (cs.AR). Cite as: arXiv:2412.19234 [cs.AR], https://doi.org/10.48550/arXiv.2412.19234
[278] PDF — Our challenge as computer architects is to deliver end-to-end performance growth at historical levels in the presence of technology discontinuities. We can address this challenge by focusing on...
[279] Future of Computer Architecture: Challenges and Opportunities – csbranch.com — Traditionally, computer architecture involved designing processors, memory systems, and input/output devices to optimize performance, efficiency, and cost. Future architectures must explore alternative approaches, such as new instruction sets or parallel processing techniques, to continue improving performance. They also need to address the memory bottleneck by developing faster and more efficient memory hierarchies, such as new types of non-volatile memory or advanced cache systems, and will continue to advance in enabling more efficient and powerful AI applications. By understanding and addressing the challenges and opportunities in computer architecture, we can pave the way for a future where computing systems continue to evolve and drive innovation.
[281] Real-Time Task Schedulers for a High-Performance Multi-Core System — Abstract: This paper proposes a multi-objective task scheduling algorithm for high-performance real-time computing systems built on multicore processors. Most real-time systems are battery powered and operate many complex mechanisms. In such systems, it is necessary to consider energy consumption, core/processor utilization and deadline miss rate to improve performance.
[282] Performance Analysis of Real-Time Scheduling Algorithms — Scheduling algorithms are very important for managing tasks in real-time systems. In this paper we give an overview of real-time scheduling techniques for uniprocessors and multiprocessors, then present a comparison between the multiprocessor scheduling algorithms, which are classified into partitioned and global approaches.
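One of the classic uniprocessor policies covered by surveys like [282] is earliest-deadline-first (EDF): always dispatch the ready task with the nearest absolute deadline. The task names and deadlines below are invented for illustration; this sketch shows only the dispatch ordering, not preemption or periodic release.

```python
# Minimal earliest-deadline-first (EDF) dispatch-order sketch.
# Tasks are (name, absolute_deadline) pairs; parameters are illustrative.
import heapq

def edf_order(tasks: list[tuple[str, int]]) -> list[str]:
    """Return task names in EDF dispatch order (earliest deadline first)."""
    heap = [(deadline, name) for name, deadline in tasks]
    heapq.heapify(heap)  # min-heap keyed on deadline
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

ready = [("sensor_read", 10), ("actuate", 4), ("log", 25)]
print(edf_order(ready))  # -> ['actuate', 'sensor_read', 'log']
```

On a uniprocessor, EDF is optimal in the sense that if any schedule meets all deadlines, EDF does too; the multiprocessor case is where the partitioned-vs-global split discussed above comes in.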
[287] Managing performance-reliability tradeoffs in multicore processors ... — There is a fundamental tradeoff between processor performance and lifetime reliability. High-throughput operation increases power and heat dissipation, which adversely impacts lifetime reliability. On the contrary, lifetime reliability favors low utilization to reduce stress and avoid failures. A key challenge in understanding this tradeoff is connecting application characteristics...
[289] [1902.02343] Exploration of Performance and Energy Trade-offs for ... — Energy-efficiency has become a major challenge in modern computer systems. To address this challenge, candidate systems increasingly integrate heterogeneous cores in order to satisfy diverse computation requirements by selecting cores with suitable features. In particular, single-ISA heterogeneous multicore processors such as ARM big.LITTLE have become very attractive since they offer good...
[291] (PDF) Thermal Management and Power Optimization in Modern CPU and GPU Architectures — As computational demands continue to rise, thermal management and power optimization have become critical concerns in the design of modern CPU and GPU architectures. We investigate dynamic voltage and frequency scaling (DVFS), advanced cooling solutions, and power gating mechanisms that reduce energy consumption without compromising performance. Emerging technologies, such as near-threshold voltage computing and thermal-aware design automation, are discussed for their potential to revolutionize power and thermal management. Our findings underscore the necessity of a multi-faceted approach, integrating both hardware and software innovations, to meet the growing power and thermal management needs of modern computational architectures.
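The leverage DVFS gets, as discussed in [291], comes from the standard dynamic power model P_dyn ≈ C·V²·f: because lowering frequency usually permits lowering voltage as well, power falls roughly with the cube of clock speed. The capacitance, voltage, and frequency values below are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope dynamic power model behind DVFS:
#   P_dyn ~= C * V^2 * f  (switched capacitance x voltage^2 x frequency).
# All operating-point numbers are illustrative assumptions.

def dynamic_power_w(capacitance_f: float, voltage_v: float, freq_hz: float) -> float:
    return capacitance_f * voltage_v ** 2 * freq_hz

base = dynamic_power_w(1e-9, 1.2, 3e9)        # full speed: 4.32 W
scaled = dynamic_power_w(1e-9, 0.96, 2.4e9)   # 20% lower f AND V: ~2.21 W

print(f"full speed: {base:.2f} W, after DVFS step: {scaled:.2f} W")
# power ratio is 0.8^3 ~= 0.51 for a 20% performance reduction
```

This cubic lever is why DVFS remains the workhorse of thermal management: a modest frequency sacrifice buys a disproportionate power (and heat) reduction, whereas power gating addresses the static leakage term this model omits.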